Mental chronometry is the scientific study of processing speed or reaction time on cognitive tasks to infer the content, duration, and temporal sequencing of mental operations. Reaction time (RT; also referred to as "response time") is measured as the elapsed time between stimulus onset and an individual's response on elementary cognitive tasks (ECTs), which are relatively simple perceptual-motor tasks typically administered in a laboratory setting. Mental chronometry is one of the core methodological paradigms of human experimental, cognitive, and differential psychology, but is also commonly analyzed in psychophysiology, cognitive neuroscience, and behavioral neuroscience to help elucidate the biological mechanisms underlying perception, attention, and decision-making in humans and other species.
Mental chronometry uses measurements of elapsed time between sensory stimulus onsets and subsequent behavioral responses to study the time course of information processing in the nervous system. Distributional characteristics of response times, such as the mean and variance, are considered useful indices of processing speed and efficiency, indicating how fast an individual can execute task-relevant mental operations. Behavioral responses are typically button presses, but eye movements, vocal responses, and other observable behaviors are often used. Reaction time is thought to be constrained by the speed of signal transmission in white matter as well as the processing efficiency of neocortical gray matter.
The use of mental chronometry in psychological research is far ranging, encompassing nomothetic models of information processing in the human auditory and visual systems, as well as differential psychology topics such as the role of individual differences in RT in human cognitive ability, aging, and a variety of clinical and psychiatric outcomes. The experimental approach to mental chronometry includes topics such as the empirical study of vocal and manual latencies, visual and auditory attention, temporal judgment and integration, language and reading, movement time and motor response, perceptual and decision time, memory, and subjective time perception.
The first documentation of human reaction time as a scientific variable came from practical concerns in the field of astronomy. In 1820, German astronomer Friedrich Bessel applied himself to the problem of accuracy in recording stellar transits, which was typically done by using the ticking of a metronome to estimate the time at which a star passed the hairline of a telescope. Bessel noticed timing discrepancies under this method between the records of multiple astronomers, and sought to improve accuracy by taking these individual differences in timing into account. This led various astronomers to seek ways to minimize such differences between individuals, which came to be known as the "personal equation" of astronomical timing. The phenomenon was explored in detail by English statistician Karl Pearson, who designed one of the first apparatuses to measure it.
Purely psychological inquiries into the nature of reaction time came about in the mid-1850s. Psychology as a quantitative, experimental science has historically been considered as principally divided into two disciplines: experimental and differential psychology. The scientific study of mental chronometry, one of the earliest developments in scientific psychology, has mirrored this division since as early as the mid-1800s, when scientists such as Hermann von Helmholtz and Wilhelm Wundt designed reaction time tasks to attempt to measure the speed of neural transmission. Wundt, for example, conducted experiments to test whether emotional provocations affected pulse and breathing rate using a kymograph.
Sir Francis Galton is typically credited as the founder of differential psychology, which seeks to determine and explain the mental differences between individuals. He was the first to use rigorous RT tests with the express intention of determining averages and ranges of individual differences in mental and behavioral traits in humans. Galton hypothesized that differences in intelligence would be reflected in variation of sensory discrimination and speed of response to stimuli, and he built various machines to test different measures of this, including RT to visual and auditory stimuli. His tests involved a selection of over 10,000 men, women and children from the London public.
Welford (1980) notes that the historical study of human reaction times was broadly concerned with five distinct classes of research problems, some of which evolved into paradigms that are still in use today. These domains are broadly described as sensory factors, response characteristics, preparation, choice, and conscious accompaniments.
This relationship is conventionally expressed as Piéron's law, RT = t₀ + a·I^(−β), where I represents stimulus intensity, a represents a reducible time value, t₀ represents an irreducible time value, and β represents a variable exponent that differs across senses and conditions. This formulation reflects the observation that reaction time decreases as stimulus intensity increases, down to the constant t₀, which represents a theoretical lower limit below which human physiology cannot meaningfully operate.
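As a sketch, this intensity-RT tradeoff can be computed directly; the function name and parameter values below are illustrative assumptions, not empirical estimates:

```python
def pieron_rt(intensity, t0=0.18, a=0.10, beta=0.33):
    """Hypothetical Piéron-style function: RT = t0 + a * I**(-beta).

    t0 is the irreducible time floor (s), a the reducible component (s),
    and beta an exponent that differs across senses and conditions.
    All parameter values here are illustrative.
    """
    return t0 + a * intensity ** (-beta)

dim = pieron_rt(1.0)       # weak stimulus: slower response
bright = pieron_rt(100.0)  # strong stimulus: RT approaches the floor t0
```

Increasing intensity only buys time down to t₀; no intensity makes the predicted RT dip below the irreducible floor.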
The effects of stimulus intensity on reducing RTs were found to be relative rather than absolute in the early 1930s. One of the first observations of this phenomenon comes from the research of Carl Hovland, who demonstrated with a series of candles placed at different focal distances that the effect of stimulus intensity on RT depended on the previous level of adaptation.
In addition to stimulus intensity, varying stimulus strength (that is, "amount" of stimulus available to the sensory apparatus per unit time) can also be achieved by increasing both the area and duration of the presented stimulus in an RT task. This effect was documented in early research for response times to the sense of taste by varying the area over taste buds for detection of a taste stimulus, and for the size of visual stimuli as amount of area in the visual field. Similarly, increasing the duration of a stimulus available in a reaction time task was found to produce slightly faster reaction times to visual and auditory stimuli, though these effects tend to be small and are largely a consequence of the sensitivity of the sensory receptors.
Anticipatory muscle tension is another physiological factor that early researchers identified as a predictor of response times, with muscle tension interpreted as an index of cortical arousal level. That is, if physiological arousal is high upon stimulus onset, greater preexisting muscular tension facilitates faster responses; if arousal is low, weaker muscle tension predicts slower responses. However, too much arousal (and therefore muscle tension) was also found to negatively affect performance on RT tasks as a consequence of an impaired signal-to-noise ratio.
As with many sensory manipulations, such physiological response characteristics as predictors of RT operate largely outside of central processing, which differentiates these effects from those of preparation, discussed below.
This relationship can be summarized in simple terms by an equation expressing RT as a function of stimulus probability, where a and b are constants related to the task and p denotes the probability of a stimulus appearing at any given time.
In simple RT tasks, constant foreperiods of about 300 ms over a series of trials tend to produce the fastest responses for a given individual, and responses lengthen as the foreperiod becomes longer, an effect that has been demonstrated up to foreperiods of many hundreds of seconds. Foreperiods of variable interval, if presented in equal frequency but in random order, tend to produce slower RTs when the intervals are shorter than the mean of the series, and can be faster or slower when greater than the mean. Whether held constant or variable, foreperiods of less than 300 ms may produce delayed RTs because processing of the warning may not have had time to complete before the stimulus arrives. This type of delay has significant implications for the question of serially organized central processing, a complex topic that has received much empirical attention in the century following this foundational work.
The first scientist to recognize the importance of response options on RT was Franciscus Donders (1869; original work published 1868). Donders found that simple RT is shorter than recognition RT, and that choice RT is longer than both. Donders also devised a subtraction method to analyze the time taken by component mental operations: by subtracting simple RT from choice RT, for example, it is possible to estimate how much time is needed for stimulus discrimination and response selection. This method provides a way to investigate the cognitive processes underlying simple perceptual-motor tasks, and formed the basis of subsequent developments.
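Donders' subtraction logic is simple arithmetic; the following is a minimal sketch, with hypothetical RT values and stage labels chosen for illustration:

```python
def donders_stages(simple_rt, recognition_rt, choice_rt):
    """Estimate stage durations by subtraction, assuming 'pure insertion'
    (each added task requirement leaves the other stages unchanged)."""
    return {
        "detection + motor response": simple_rt,
        "stimulus discrimination": recognition_rt - simple_rt,
        "response selection": choice_rt - recognition_rt,
    }

# Hypothetical per-task means (seconds): simple < recognition < choice
stages = donders_stages(simple_rt=0.220, recognition_rt=0.310, choice_rt=0.380)
```

The pure-insertion assumption baked into the subtraction is exactly what later experiments challenged, as discussed in the paragraph that follows.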
Although Donders' work paved the way for future research in mental chronometry, it was not without its drawbacks. His method rested on the assumption of "pure insertion": that inserting a particular complicating requirement into an RT paradigm would not affect the other components of the test. This assumption, that the incremental effect on RT was strictly additive, did not hold up to later experimental tests, which showed that insertions can interact with other portions of the RT paradigm. Despite this, Donders' theories are still of interest and his ideas are still used in certain areas of psychology, which now have the statistical tools to apply them more accurately.
Analyses of response time on chronometric tasks are typically concerned with five categories of measurement: central tendency of response time across a number of individual trials for a given person or task condition, usually captured by the arithmetic mean but occasionally by the median and less commonly the mode; intraindividual variability, the variation in individual responses within or across conditions of a task; skewness, a measure of the asymmetry of reaction time distributions across trials; slope, the difference between mean RTs across tasks of different type or complexity; and accuracy or error rate, the proportion of correct responses for a given person or task condition.
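Four of these five descriptors can be computed from a single block of trials (slope requires comparing means across task conditions); a sketch using only the standard library, with made-up trial data:

```python
import statistics

def rt_descriptors(rts, correct):
    """Summarize one person's block of RT trials (times in seconds)."""
    valid = [rt for rt, ok in zip(rts, correct) if ok]   # correct trials only
    mean = statistics.mean(valid)
    sd = statistics.stdev(valid)                         # intraindividual variability
    # third standardized moment as a simple skewness index
    skew = sum((x - mean) ** 3 for x in valid) / (len(valid) * sd ** 3)
    return {
        "mean": mean,
        "median": statistics.median(valid),
        "sd": sd,
        "skewness": skew,        # RT distributions are typically right-skewed
        "accuracy": sum(correct) / len(correct),
    }

summary = rt_descriptors([0.20, 0.25, 0.30, 0.60], [True, True, True, True])
```

Slope would then be the difference between the mean of this block and the mean of a block from a more complex condition.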
Human response times on simple reaction time tasks are usually on the order of 200 ms. The processes that occur during this brief time enable the brain to perceive the surrounding environment, identify an object of interest, decide an action in response to the object, and issue a motor command to execute the movement. These processes span the domains of perception and movement, and involve perceptual decision making and motor planning. Many researchers consider the lower limit of a valid response time trial to be somewhere between 100 and 200 ms, which can be considered the bare minimum of time needed for physiological processes such as stimulus perception and for motor responses. Responses faster than this often result from an "anticipatory response", wherein the person's motor response has already been programmed and is in progress before the onset of the stimulus, and likely do not reflect the process of interest.
One reason for variability that extends the right tail of an individual's RT distribution is momentary lapses. To improve the reliability of individual response times, researchers typically require a subject to perform multiple trials, from which a measure of the 'typical' or baseline response time can be calculated. Taking the mean of the raw response time is rarely an effective method of characterizing the typical response time, and alternative approaches (such as modeling the entire response time distribution) are often more appropriate.
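A small sketch of why the raw mean misleads when lapses are present; the trial values and the 1-second lapse cutoff are arbitrary assumptions:

```python
import statistics

# Hypothetical block: four typical responses plus one attentional lapse
rts = [0.25, 0.27, 0.24, 0.26, 2.50]

raw_mean = statistics.mean(rts)            # dragged far upward by the single lapse
median_rt = statistics.median(rts)         # barely affected by the long tail
trimmed = [rt for rt in rts if rt < 1.0]   # crude cutoff-based trimming
trimmed_mean = statistics.mean(trimmed)
```

The raw mean lands near 0.70 s even though four of the five responses are near 0.25 s, while the median and trimmed mean stay close to the typical response. Distribution-level modeling goes further by fitting the entire RT distribution rather than any single summary number.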
A number of different approaches have been developed to analyze RT measurements, particularly in how to deal effectively with issues that arise from trimming outliers, data transformations, measurement reliability, speed-accuracy tradeoffs, mixture models, convolution models, stochastic-order comparisons, and the mathematical modeling of stochastic variation in timed responses.
Hick's law is commonly written as RT = a + b·log₂(n), where a and b are constants representing the intercept and slope of the function, and n is the number of alternatives (Colman, A., 2001, A Dictionary of Psychology). The Jensen box is a more recent application of Hick's law. Hick's law also has interesting modern applications in marketing, where restaurant menus and web interfaces (among other things) take advantage of its principles in striving to achieve speed and ease of use for the consumer.
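A minimal sketch of the Hick function; the intercept and slope values below are illustrative, not fitted to data:

```python
import math

def hick_rt(n, a=0.20, b=0.15):
    """Mean RT predicted by Hick's law, RT = a + b * log2(n).

    a: intercept (s), b: slope (s per bit of stimulus information),
    n: number of equally likely alternatives. Values are illustrative.
    """
    return a + b * math.log2(n)

# Each doubling of the alternatives adds a constant b seconds
two_choice, four_choice = hick_rt(2), hick_rt(4)
```

The logarithmic form captures why the cost of extra menu items or interface options grows slowly: going from 2 to 4 alternatives costs the same predicted time as going from 4 to 8.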
This model and its variants account for these distributional features by partitioning a reaction time trial into a non-decision residual stage and a stochastic "diffusion" stage, where the actual response decision is generated. The distribution of reaction times across trials is determined by the rate at which evidence accumulates in neurons with an underlying "random walk" component. The drift rate (v) is the average rate at which this evidence accumulates in the presence of this random noise. The decision threshold (a) represents the width of the decision boundary, or the amount of evidence needed before a response is made. The trial terminates when the accumulating evidence reaches either the correct or the incorrect boundary.
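The accumulation process can be sketched as a discretized random walk; everything below (step size, parameter values, boundary convention) is an illustrative assumption rather than a full implementation of the model:

```python
import random

def ddm_trial(v=0.30, a=1.0, t0=0.20, dt=0.001, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts midway between boundaries 0 and a, drifts toward the
    upper (here, 'correct') boundary at rate v, and is jittered by Gaussian
    noise at each step; total RT adds the non-decision time t0.
    """
    rng = rng or random.Random(42)
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + (dt ** 0.5) * rng.gauss(0.0, 1.0)  # random-walk step
        t += dt
    return t0 + t, x >= a  # (reaction time in seconds, reached correct boundary?)

rt, correct = ddm_trial()
```

Simulating many such trials with a positive drift rate yields a right-skewed RT distribution with occasional errors, the distributional features the model is designed to explain.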
The mean RTs for sprinters at the Beijing Olympics were 166 ms for males and 169 ms for females, but in one out of 1,000 starts they can achieve 109 ms and 121 ms, respectively. This study also concluded that longer female RTs can be an artifact of the measurement method used, suggesting that the starting block sensor system might overlook a female false-start due to insufficient pressure on the pads. The authors suggested compensating for this threshold would improve false-start detection accuracy with female runners.
The IAAF has a controversial rule that if an athlete moves in less than 100 ms, it counts as a false start, and since 2009 the athlete must be disqualified, despite an IAAF-commissioned study in 2009 indicating that top sprinters are sometimes able to react in 80–85 ms.
In a classic example of a simultaneous discrimination RT paradigm, conceived by social psychologist Leon Festinger, two vertical lines of differing lengths are shown side-by-side to participants simultaneously. Participants are asked to identify as quickly as possible whether the line on the right is longer or shorter than the line on the left. One of these lines would retain a constant length across trials, while the other took on a range of 15 different values, each one presented an equal number of times across the session.
An example of the second type of discrimination paradigm, which administers stimuli successively or serially, is a classic 1963 study in which participants are given two sequentially lifted weights and asked to judge whether the second is heavier or lighter than the first.
The third broad type of discrimination RT task, wherein stimuli are administered continuously, is exemplified by a 1955 experiment in which participants are asked to sort packs of shuffled playing cards into two piles depending on whether the card had a large or small number of dots on its back. Reaction time in such a task is often measured by the total amount of time it takes to complete the task.
CRT tasks can be highly variable. They can involve stimuli of any sensory modality, most typically of visual or auditory nature, and require responses that are typically indicated by pressing a key or button. For example, the subject might be asked to press one button if a red light appears and a different button if a yellow light appears. The Jensen box is an example of an instrument designed to measure choice RT with visual stimuli and keypress response. Response criteria can also be in the form of vocalizations, such as the original version of the Stroop task, where participants are instructed to read the names of words printed in colored ink from lists. Modern versions of the Stroop task, which use single stimulus pairs for each trial, are also examples of a multi-choice CRT paradigm with vocal responding.
Models of choice reaction time are closely aligned with Hick's law, which posits that average reaction times lengthen as a function of more available choices. Hick's law can be reformulated as MRT = a·log₂(n + 1), where MRT denotes mean RT across trials, a is a constant, and n + 1 represents the number of possibilities including "no signal". This accounts for the fact that in a choice task, the subject must not only make a choice but also first detect whether a signal has occurred at all (adding one alternative to the n of the original formulation).
With the advent of the functional neuroimaging techniques of PET and fMRI, psychologists started to modify their mental chronometry paradigms for functional imaging. Although psycho(physio)logists have been using electroencephalographic measurements for decades, the images obtained with PET attracted great interest from other branches of neuroscience, popularizing mental chronometry among a wider range of scientists in recent years. In these studies, participants perform RT-based tasks during imaging, which reveals the parts of the brain involved in the cognitive process under study.
Functional magnetic resonance imaging (fMRI) has been used to measure brain activity while subjects identify whether a presented digit is above or below five. According to Sternberg's additive theory, performing this task involves a series of stages: encoding, comparing against the stored representation for five, selecting a response, and then checking for error in the response. The fMRI image presents the specific locations where these stages occur in the brain while performing this simple mental chronometry task.
In the 1980s, neuroimaging experiments allowed researchers to detect activity in localized brain areas by injecting radionuclides and using positron emission tomography (PET) to detect them. fMRI has also been used to detect the precise brain areas that are active during mental chronometry tasks. Many studies have shown that a small number of widely distributed brain areas are involved in performing these cognitive tasks.
Current medical reviews indicate that dopaminergic signaling through pathways originating in the ventral tegmental area is strongly positively correlated with improved (shortened) RT; e.g., dopaminergic pharmaceuticals like amphetamine have been shown to expedite responses during interval timing, while dopamine antagonists (specifically, for D2-type receptors) produce the opposite effect. Similarly, age-related loss of dopamine from the striatum, as measured by SPECT imaging of the dopamine transporter, strongly correlates with slowed RT.
The distinction between this experimental approach and the use of chronometric tools to investigate individual differences is more conceptual than practical, and many modern researchers integrate tools, theories and models from both areas to investigate psychological phenomena. Nevertheless, it is a useful organizing principle to distinguish the two areas in terms of their research questions and the purposes for which a number of chronometric tasks were devised. The experimental approach to mental chronometry has been used to investigate a variety of cognitive systems and functions that are common to all humans, including memory, language processing and production, attention, and aspects of visual and auditory perception. The following is a brief overview of several well-known experimental tasks in mental chronometry.
The physical match task was the simplest: subjects had to encode the letters, compare them to each other, and make a decision. In the name match task, subjects were forced to add a cognitive step before making a decision: they had to search memory for the names of the letters and then compare those before deciding. In the rule-based task, they also had to categorize the letters as either vowels or consonants before making their choice. The time taken to perform the rule match task was longer than for the name match task, which in turn was longer than for the physical match task. Using the subtraction method, experimenters were able to determine the approximate amount of time it took subjects to perform each of the cognitive processes associated with these tasks.
Empirical research into the nature of the relationship between reaction time and measures of intelligence has a long history dating back to the early 1900s, with some early researchers reporting a near-perfect correlation in a sample of five students. The first review of these incipient studies, in 1933, analyzed over two dozen studies and found a smaller but reliable association between measures of intelligence and the production of faster responses on a variety of RT tasks.
Up through the beginning of the 21st century, psychologists studying reaction time and intelligence continued to find such associations, but were largely unable to agree about the true size of the association between reaction time and psychometric intelligence in the general population. This is likely due to the fact that the majority of samples studied had been selected from universities and had unusually high mental ability scores relative to the general population. In 2001, psychologist Ian J. Deary published the first large-scale study of intelligence and reaction time in a representative population sample across a range of ages, finding a correlation between psychometric intelligence and simple reaction time of –0.31 and four-choice reaction time of –0.49.
In 2016, a genome-wide association study (GWAS) of cognitive function found 36 genome-wide significant genetic variants associated with reaction time in a sample of around 95,000 individuals. These variants were found to span two regions on chromosome 2 and chromosome 12, which appear to be in or near genes involved in spermatogenesis and in signaling activities by cytokine and growth factor receptors, respectively. This study additionally found significant genetic correlations between RT, memory, and verbal-numerical reasoning.
Neurophysiological research using event-related potentials (ERPs) has used P3 latency as a correlate of the "decision" stage of a reaction time task. These studies have generally found that the magnitude of the association between g and P3 latency increases with more demanding task conditions. Measures of P3 latency have also been found to be consistent with the worst performance rule, wherein the correlation between P3 latency quantile mean and cognitive assessment scores becomes more strongly negative with increasing quantile. Other ERP studies have found consilience with the interpretation that the g-RT relationship resides chiefly in the "decision" component of a task, wherein most of the g-related brain activity occurs following stimulus evaluation but before motor response, while components involved in sensory processing change little across differences in g.
During senescence, RT deteriorates (as does fluid intelligence), and this deterioration is systematically associated with changes in many other cognitive processes, such as executive functions, working memory, and inferential processes. In the theory of Andreas Demetriou, processing speed is treated as a core dimension of cognitive development.
A 2014 study measured choice RT in a sample of 63 high and 63 low Extraversion participants, and found that higher levels of Extraversion were associated with faster responses. Although the authors note this is likely a function of specific task demands rather than underlying individual differences, other authors have proposed the RT-Extraversion relationship as representing individual differences in motor response, which may be mediated by dopamine. However, these studies are difficult to interpret in light of their small samples and have yet to be replicated.
In a similar vein, other researchers have found a small (r < 0.20) association between RT and Neuroticism, wherein more neurotic individuals tended to be slower at RT tasks. The authors interpret this as reflecting a higher arousal threshold in response to stimuli of varying intensity, speculating that higher-Neuroticism individuals may have relatively "weak" nervous systems. In a somewhat larger study of 242 college undergraduates, Neuroticism was found to be more substantially correlated (r ≈ 0.25) with response variability, with higher Neuroticism associated with greater RT standard deviations. The authors speculate that Neuroticism may confer greater variance in reaction time through the interference of "mental noise."
History and early observations
Sensory factors
Strength of stimulus
Sensory modality
Response characteristics
Preparation
Choice
Conscious accompaniments
Measurement and mathematical descriptions
Distribution of response times
Hick's law
Drift-diffusion model
Standard reaction time paradigms
Simple RT paradigms
Recognition or go/no-go paradigms
Discrimination paradigms
Choice RT paradigms
Application in biological psychology/cognitive neuroscience
Reaction time as a function of experimental conditions
Sternberg's memory-scanning task
Shepard and Metzler's mental rotation task
Sentence-picture verification
Models of memory
Posner's letter matching studies
Reaction time as a function of individual differences
Cognitive ability
Mechanistic properties of the RT-cognitive ability relationship
Biological and neurophysiological manifestations of the RT-g relationship
Diffusion modeling of RT and cognitive ability
Cognitive development
Health and mortality
Big-Five personality traits
Reaction time as a function of different analytical choices
See also
Further reading
External links